The Polygraph Place



  Polygraph Place Bulletin Board
  Professional Issues - Private Forum for Examiners ONLY
  New Adobe BS Detector (Page 1)


This topic is 2 pages long.
Author Topic:   New Adobe BS Detector
Ted Todd
Member
posted 12-08-2012 02:13 PM

Dan,

Perhaps this type of polygraph exam would be more to your liking?

Merry CHRISTmas to all!

Ted

http://www.youtube.com/watch?feature=player_detailpage&v=gjsT-z16vR0

IP: Logged

Dan Mangan
Member
posted 12-10-2012 09:51 AM
Ted,

When someone asks you how accurate polygraph is, what do you tell them?

How accurate do you (yourself) feel polygraph is for the truthful test-taker?

Happy Kwanzaa,
Dan


sackett
Moderator
posted 12-10-2012 11:34 AM
Dan,

I have held my thoughts for a while, but after your last posting here I feel compelled to post.

After reading your many comments (how you find time to address so many issues I can’t understand – I’m too busy to play on this site daily), I remain curious about your motivations for your numerous postings.

You profess to be an examiner, but seemingly do not support or believe in the basis of your own profession. You challenge almost every assertion made about the efficacy and effectiveness of the polygraph, almost in what seems a self-loathing manner. I think I understand your attempt, in some cases, to make polygraph better by making the profession self-assess, self-critique and defend itself, but your postings seem overly negative. This intrigues me.

On a side note, I do agree with you in some areas, especially regarding the effort by many within the profession to overly “scientificate” it. Ever since the NAS attacked polygraph as less than scientific, some have attempted to take this art, supported by science, and make it provable as a science to those who, I believe, can never and will never accept the premise that polygraph works for the reasons promoted. But to what is your end…?

Not a criticism, just an observation based in curiosity.


Jim


Dan Mangan
Member
posted 12-10-2012 12:53 PM
Jim,

Your feelings are understandable.

But the answer is simple: I'm a polygraph realist.

Outside the microcosm of the polygraph industry -- and the associated military-industrial, intelligence and law enforcement complexes -- there seems to be precious little belief in and support for polygraph.

Why is that?

Are we right and they're wrong?

It's easy to get overly invested in things.

I think it's important to keep an open mind.

Further, I think that polygraph test-takers should be made aware of the risks, realities and limitations of polygraph. Is that bad? If so, why?

Sometimes, it strikes me that polygraph is almost a kind of religion. And if you aren't a believer, then you somehow don't belong.

That perception doesn't fit with the scientific model all that well.

Dan



Barry C
Member
posted 12-10-2012 08:07 PM
Why should anybody "believe in" polygraph if it's mostly an art? (I don't even think that makes logical sense, but I'll play the game.)


Ted Todd
Member
posted 12-10-2012 08:11 PM
Dan,

This post was a joke. Not a dig. Even though you and I frequently disagree, I do believe there is a certain amount of "art" in what we do.

Ted


Dan Mangan
Member
posted 12-10-2012 09:23 PM
Ted:

No sweat -- I was only hijacking the forum and stirring the pot.

Seriously, my Backster bro', no worries.

But, Ted, you dodged my questions. So, for that, go fetch The Chief a doughnut!

Barry:

Why should anybody "believe in" polygraph at all?

Because the indu$try says it "works"?

Because we have anecdotes?

Because NAS said -- based on indu$try-supplied "studies" (most of which are weak) -- that polygraph "can discriminate lying from truth-telling at rates well above chance, though well below perfection"?

That's kind of like Rush Limbaugh when he says, tongue in cheek, that he's "almost always right 97.9% of the time."

You're a big cheerleader for the evidence-based model. Show the world some INDEPENDENT evidence that polygraph "works".

Dan


Barry C
Member
posted 12-10-2012 11:28 PM
Define independent. It must mean something different to you than it does to me.


Dan Mangan
Member
posted 12-11-2012 10:33 AM
Barry,

There may be different forms of an independent approach, but let me revisit one...

I think an independent study should be a FIELD study conducted by an entity with no vested interest, or any connection whatsoever, to polygraph.

CBS' 60 Minutes, in their 1986 sting operation, demonstrated some shortcomings of polygraph "science" in their version of an independent study.

For those who are unfamiliar with this episode, here's how GM describes it on the A-P site:

quote:
In this 1986 exposé of the polygraph trade, CBS 60 Minutes set up a test in which three polygraph examiners chosen at random from the New York telephone directory were asked to administer polygraph examinations to four different employees of the CBS-owned magazine, Popular Photography, regarding the theft of a camera and lens. In fact, no theft had occurred.

Each polygrapher was told that a different employee was suspected as the likely culprit. In each case, the polygrapher found the person who had been fingered to be deceptive.


So, in the 60 Minutes piece, polygraph's accuracy was zero. Yes, they only conducted four "tests," but even if they had conducted 10, and the remaining six examiners "got it right," accuracy would be just 60%.

I know, I know. Things have evolved. We're better now. We have improved standards, better training, advanced algorithms, yada yada yada.

Quick sidebar:
In another thread we discussed the trick stim test. At least one APA director is on record here saying he has no problem with an examiner using trickery -- not only on the test subject, but in front of an audience of future lawyers.

Am I the only one here that thinks such conduct is unethical?

I bring up the trickery angle for this reason: To counter the inevitable hue and cry from our fellow forum denizens about the 60 Minutes feature being a bogus exercise.

That kind of thinking suggests that it's OK for the examiner to trick the test subject, but it's not OK for the independent research entity to trick the examiner.

If polygraph is robustly scientific, shouldn't the test hold up, 1) without a trick stim test; and 2) even if the examiner is misled about the case circumstances?

Another element of the 60 Minutes piece that should be an essential part of any independent polygraph study is the random selection of (unwitting) examiners.

Under those guidelines, I suspect that the resulting accuracy findings would be far lower than the lofty numbers claimed in the indu$try's insider meta-analysis.

Dan


Barry C
Member
posted 12-11-2012 11:09 AM
Okay. No time now. "Independent research" means a person did it him or herself. If it's really my research, it's independent. That's different from "disinterested," but I don't know how anybody could be completely disinterested.

More later....


Barry C
Member
posted 12-11-2012 11:11 PM
quote:
So, in the 60 Minutes piece, polygraph's accuracy was zero. Yes, they only conducted four "tests," but even if they had conducted 10, and the remaining six examiners "got it right," accuracy would be just 60%.

This is not correct. Doug Williams was kind enough to gloat about this in his book.

Here's the breakdown:

Kevin Cassidy of Spartan Security Services tested four people and concluded one was deceptive. (On 60 Minutes, they - I think - only mention three.)

Joe Diaz of Intelligence Services, Inc., tested two and found one deceptive.

Ed Sullivan of Sterling Polygraph Services tested three and found one deceptive. Then Doug claims he went in and passed after being given the camera that wasn't stolen, which he claims is an error. I think that's it. Doug is hard to follow at times. In any event, he didn't steal it, so it was correct.

So, out of 10 tests, 3 were errors (FPs), meaning 70% accuracy.
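
For what it's worth, that tally checks out with a few lines of arithmetic. A quick Python sketch (the per-examiner counts are as reported in this post, per Williams' account; all examinees were innocent, so every "deceptive" call is a false positive):

```python
# Per-examiner tallies as reported above; no theft occurred, so every
# "deceptive" call is a false positive (FP).
tests = {
    "Cassidy":  {"tested": 4, "called_deceptive": 1},
    "Diaz":     {"tested": 2, "called_deceptive": 1},
    "Sullivan": {"tested": 3, "called_deceptive": 1},
    "Williams": {"tested": 1, "called_deceptive": 0},  # passed after "finding" the camera
}

total = sum(t["tested"] for t in tests.values())
false_positives = sum(t["called_deceptive"] for t in tests.values())
accuracy = (total - false_positives) / total

print(total, false_positives, accuracy)  # 10 3 0.7
```

Ten tests, three false positives: 70% accuracy, as stated.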

This was pre-EPPA, and Greenblatt, owner of Sterling, was making $2,000,000 a year with 10 polygraph examiners:

quote:
If it begins to look like your firm's business might decline, it may be time to reexamine exactly what you're offering customers.

That's what William M. Greenblatt did in 1987 when it appeared his firm's days were numbered. For 12 years, his company, Sterhug Polygraph Systems of New York City, had been providing polygraph services to business customers seeking to evaluate job applicants. At the time, the firm employed 10 full-time polygraph operators and had annual revenues of $2 million.

But that year, Congress passed a law banning the use of polygraph testing to screen applicants for most positions.

"The polygraph industry was always teetering on the edge of being eliminated-even when I first went into it," Greenblatt recalls. "As a matter of fact, in my first year in business, the New York Legislature passed a bill outlawing [polygraph use], but the bill was vetoed."


Here's the link:
http://web.ebscohost.com/ehost/detail?sid=ffd186f1-56ef-4eb1-9654-11c63684fe32%40sessionmgr10&vid=1&hid=17&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=a9h&AN=500673

At the time, Diane Sawyer said tests were as much as $150 each. So, two million divided by 10 examiners equals $200,000 per year each - if they are making the maximum for each test (doubtful). That's over 5 tests per day, 5 days a week for 50 weeks per year.
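
A quick sketch of that back-of-the-envelope math in Python (the $150 maximum fee and the 50-week, 5-day schedule come from the post above):

```python
revenue = 2_000_000    # reported annual revenue for the firm
examiners = 10
max_fee = 150          # "as much as $150" per test, per Diane Sawyer

per_examiner = revenue / examiners          # $200,000 per examiner per year
tests_per_year = per_examiner / max_fee     # assumes every test bills the maximum
tests_per_day = tests_per_year / (50 * 5)   # 50 weeks, 5 days a week

print(per_examiner, round(tests_per_day, 1))  # 200000.0 5.3
```

Just over 5 tests a day, every working day, at the top rate - which is why the "maximum fee" assumption looks doubtful.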

I never researched whether they were APA members, but even if they were, they sound more like chart rollers. According to Williams, they were all "prestigious" companies. I imagine New York had a lot of fly-by-nights. (They still have one TV personality....)

The problem with these tests is that everybody was innocent, and the three errors were all the "suspects." That is, before testing, somebody from the show told the examiners who the likely culprit was, and that is whom the examiners identified each time. It seems more like a study in bias - something the NAS said was a concern for all forensic sciences....

quote:
Quick sidebar:
In another thread we discussed the trick stim test. At least one APA director is on record here saying he has no problem with an examiner using trickery -- not only on the test subject, but in front of an audience of future lawyers.

Am I the only one here that thinks such conduct is unethical?


I think it's unnecessary (based on research). I don't know if it's ethical. Is lying ever ethical? That's a question that makes for great discussion in secular and theological training institutions all over the world.

quote:
Another element of the 60 Minutes piece that should be an essential part of any independent polygraph study is the random selection of (unwitting)examiners.

That, however, is unethical. You'll never get it past any IRB. Avital Ginton had a great real-life study, but he only got two cheaters in his study, so his sample was tiny. Great research design, but unethical by today's standards.

[This message has been edited by Barry C (edited 12-11-2012).]


Dan Mangan
Member
posted 12-12-2012 08:37 AM
Barry,

Thanks for taking the time to get into this.

Are you saying that any attempt to conduct a study using unwitting examiners would be unethical ("by today's standards")?

If so, doesn't that put a blind study -- and by that I mean one in which the examiners are blind -- off the research table? If that's the case, I don't get it.

To clarify, when I speak of randomly chosen examiners, I'm talking about random choice from the APA pool, not "Groganites."

I am aware that the random-choice element complicates things... That is, the study would encompass other variables in the polygraph process (e.g., examiner competence), and would not be controlled for the purest manageable form of potential polygraph efficacy.

In spite of that complication, such an approach would be more indicative of what the retail consumer is likely to encounter in real-world circumstances. I think that has value.

Getting back to trickery... You claim you don't know if using a trick stim test on a subject is unethical. Do you "know" if tricking a law-school audience attending a lecture and demonstration on polygraph is unethical?

Dan


Barry C
Member
posted 12-12-2012 10:26 AM
quote:
Are you saying that any attempt to conduct a study using unwitting examiners would be unethical ("by today's standards")?

If so, doesn't that put a blind study -- and by that I mean one in which the examiners are blind -- off the research table? If that's the case, I don't get it.


You need informed consent from all participants. That means you would have to do what the medical profession does in double-blind studies. That is, tell participants (examiners) that, if they agree, they will be part of a study in which they will be assigned examinees. (They'll know they are part of an experiment. To do otherwise would be unethical.) Some of those examinees will be from "real" cases and some will be study participants (examinees). The examiner, in theory, won't know which is which. It would be like telling people they're going to get a pill, but they won't know if it's a sugar pill or a real (experimental) medication. Even if you got examiners to agree, I don't know how you'd fool people who know they are going to get sham cases. I would think those would be qualitatively different from the real cases and rather obvious to most examiners.

Random selection is necessary for a study to be generalizable, and it is a real concern with any study. Without randomization, you have a non-probability sample, which makes the study, pretty much, a case study. (A lot of what we have are case studies.) It's not perfect, but it is not intellectually wrong to argue that multiple case studies from various places, with various examiners, etc, argue in favor of the points on which the studies converge. Of course, the case studies still would need to have a representative sample to make the argument. If you only use "experts," then you can only make conclusions about the expert portion of the population of examiners.

I may not fully understand your research design, but people would have to know they are part of a study no matter what it is. I don't think that's a problem. The NAS actually suggests testing to occur by mixing known (test) cases in with analysts' regular cases. They would know up front that it is just part of the job, but that's different from a study in which you're recruiting volunteers. If an employee doesn't want to be tested, he or she can find another job. A study participant needs to be able to say he or she doesn't want to play or wants to quit.

quote:
Getting back to trickery... You claim you don't know if using a trick stim test on a subject is unethical. Do you "know" if tricking a law-school audience attending a lecture and demonstration on polygraph is unethical?

I don't like it, but I can understand the argument: when you do a live demonstration of something fallible in order to show what it looks like when it works, you take precautions (even "tricks") to display what you want. We do that in training exercises to make sure participants experience what we want them to experience. I think it's important to tell people the error rate so they know.

When I do those types of demonstrations, I tell people it's live, and I've never had an error, so I'm probably due. I'm not ready to condemn others yet simply because I disagree. Let's say if I had to pick a side, we'd be standing together.


clambrecht
Member
posted 12-12-2012 07:21 PM
Hashing out how to design a study will sharpen all of our skills. So far, we have selected a random sample of examiners with mainstream training. The examiners all agree to participate yet will be "blind" to the specifics of the experiment.

How many examiners are enough to withstand scrutiny?

[This message has been edited by clambrecht (edited 12-12-2012).]


Dan Mangan
Member
posted 12-12-2012 08:08 PM
Corey,

I'd say the more, the merrier.

Let's think big. Say, 10% of the APA membership, evenly mixed among gummint, state/municipal/local LE, and private examiners.

Yes, the administrative/oversight logistics would be a nightmare...

Doesn't mean we shouldn't try.

But what disinterested entity could actually run this thing?

I'm thinking that the psych department of a big-name university might be a good start.

One thing is for certain: The indu$try must NOT have its fingerprints on this exercise.

Of course, should the results show that polygraph is basically one step above a crap-shoot, the indu$try will attack the methodology.

It could be a case of "damned if you do, damned if you don't." (Involve the indu$try, that is.)

Anyone have any bright ideas on how to map this out?

Dan

[This message has been edited by Dan Mangan (edited 12-12-2012).]


Barry C
Member
posted 12-12-2012 08:46 PM
The point of a probability sample (in this case, a random sample of APA members) is that you don't have to use a lot of people. You'd never get 10%, and the cost would be astronomical. A well-designed study can cost six figures.


Dan Mangan
Member
posted 12-12-2012 09:16 PM
Maybe we could interest Katelyn Sack in forming a UVA posse to run this thing...


Barry C
Member
posted 12-13-2012 01:55 AM
You've got to be clear on what the research question is. I think the question is "What is the average accuracy of an APA polygraph examiner conducting a single-issue test?" If that's it, then you design around the question. To discover the answer, you test every APA member. Since you can't do that, you test a random sample of APA examiners.

This is complicated, though. To isolate the examiner variable, you need to hold all the other variables constant: same testing process, examinee, etc. However, that would only tell us the average accuracy of APA examiners when testing that particular examinee. So, you'd need to randomly select a sample of people from the general population so you could generalize results to the general population (and each examiner would have to test each examinee). Will you limit the general population to healthy individuals only? What about individuals diagnosed with depression? The list could go on and on and on, and I've barely started.

This is why it makes sense to do what's been done. That's not to say we don't need more research. We do, but a massive study like this is nearly impossible. It makes more sense to approach it in some other way. There have been some creative suggestions by scientists.

I do think it's worthwhile to brainstorm some good research designs on a smaller level that could actually be funded and carried out. It's just going to have to be a piece at a time.

We really need to do some theory testing. I think we're there - if not, we're close.


Dan Mangan
Member
posted 12-13-2012 09:47 AM
Just thinkin' out loud...

What about 10 universities around the country running 25 exams each?

Protocols could be established ahead of time.

All the data would be uploaded to the disinterested board of experts. Perhaps we could draw from the NAS committee.

Also, we would need to have mechanisms in place for transparency.

So many obstacles, but so many possibilities.

A herculean task, to be sure. Is it possible? Sure. Theoretically, at least.

So is winning PowerBall.

This is like daydreaming about one's future PowerBall winnings...


Barry C
Member
posted 12-13-2012 10:11 AM
I think the daydreaming is instructive. It could even blossom into a real research design. It raises the issue that polygraph needs a non-polygraph-run research team seeking its own funding and conducting research. You won't ever get completely disinterested parties. Researchers conduct research in areas in which they have an interest. So, you'd end up with those who don't like polygraph as well as those who do. That's okay.

I've wanted to invite some anti-, for lack of a better term, polygraph scientists to the APA for a healthy debate and / or presentations on what they've found, just to start a dialogue and stimulate interest. There's been little interest.

Funny, when research was done in Canada by the opposition, they used real examiners. Word has it everybody was cheating and playing games on both sides (researchers and examiners). So, the level of distrust can cause problems, another variable that would need to be controlled.


rnelson
Member
posted 12-13-2012 11:15 AM
It's a great idea to engage in money-is-no-object creative thinking because it can re-align us with our problems and objectives, and it can give us perspective on why and how we have solved problems in our imperfect real world.

It's also a great idea to then engage in creative thinking about what we can realistically achieve within our available resources. If we don't do this, then all that creative intellectualizing goes nowhere and amounts to arm-chair quarterbacking.

So with that in mind, one of the things we do when we cannot get the perfect sample that is going to be perfectly representative of all examiners or all examinees (which is always the case) is to combine the results of different smaller studies into a larger study. We can start by assuming that each small study is an imperfect representation of the answer (with both good information and some noise in the results), and we can also assume that the aggregation of a lot of smaller studies will provide a less imperfect representation of the answer than any individual smaller study.

It turns out that we are not the only profession to run into these dilemmas, and this is the reason that meta-analytic research has become quite common.

Think about this: if we want to see a 5% to 10% sampling of all examiners and we have 3,000 or 4,000 examiners or so, then we would need about 200 to 400 examiners. Would that give us some confidence that we have reasonably captured a fine-grained cross-section of the range of differences in skills and abilities of different examiners? Perhaps.
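
That ballpark is in the same neighborhood as what a standard survey-statistics calculation gives. As a hedged illustration (this formula is not from the thread; it is the classic Cochran sample-size formula with a finite-population correction, at 95% confidence and a 5% margin of error):

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran's sample-size formula with finite-population correction.
    p=0.5 is the worst-case proportion; z=1.96 corresponds to 95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(3000), sample_size(4000))  # 341 351
```

So a few hundred randomly selected examiners, not thousands, is what the textbook math calls for at these population sizes - consistent with the 5% to 10% figure.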

BTW, anyone notice that there were 295 different examiners who participated in the studies included in the 2011 meta-analytic survey?

.02

------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)



Dan Mangan
Member
posted 12-13-2012 12:45 PM
Ray,

quote:
BTW, anyone notice that there were 295 different examiners who participated in the studies included in the 2011 meta-analytic survey?

But are they truly representative of rank-and-file APA members? If those 295 include a heavy gummint/LE presence -- with the attendant CEU requirements and QA mechanisms -- I don't see how the sample could be representative of your garden-variety polygraph operator.


Barry:

quote:
I've wanted to invite some anti-, for lack of a better term, polygraph scientists to the APA for a healthy debate and / or presentations on what they've found, just to start a dialogue and stimulate interest. There's been little interest.

That's unfortunate. Likewise, I've always felt that hearing some differing views would be instructive.

When I was in industry, we'd pay big money to have experts and consultants rip our products to shreds before bringing anything to market.

BTW, which side shows little interest? The evil "anti" side or the APA home boys?


Barry C
Member
posted 12-13-2012 02:31 PM
Ray jumped ahead a bit in the brainstorming process, but he's right.

I recalculated the work of the committee using a different mathematical approach and found statistical evidence that there isn't a single average effect size (in our case, the effect size is average accuracy), but rather a distribution of effect sizes. That really isn't surprising. What it tells us is that there are differences from one study to the next, and it is probably due to variables we haven't yet isolated. For example, maybe government examiners perform differently from other groups. Maybe some examinees are better candidates (in the sense of better accuracy) than others.
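The check Barry describes (one average effect versus a distribution of effects) is conventionally done with a heterogeneity statistic such as Cochran's Q. Here is a minimal sketch; the study-level numbers are invented for illustration and are not taken from the meta-analysis:

```python
# Hypothetical per-study accuracy estimates and their variances
# (invented numbers, for illustrating the method only).
effects   = [0.89, 0.85, 0.93, 0.78, 0.91]
variances = [0.0010, 0.0015, 0.0008, 0.0020, 0.0012]

# Inverse-variance weights and the fixed-effect pooled estimate.
weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

print(round(pooled, 3), round(Q, 1), df)  # 0.886 9.4 4
```

When Q is well above its degrees of freedom, a single common effect size is implausible, and it makes sense to code moderator variables and look for what drives the differences.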

One of the things one could do is code all of the different variables from study to study and try to figure out some of the answers to the questions that remain. That doesn't mean we don't need more studies. But it does suggest what Ray was hinting at a bit: we have a lot of data that probably tells us more than we realize. If we can learn from what we have, we're in a better position to figure out what variables we should begin to isolate in future studies.

I'll give you an example of where I think some of the differences occur. In 1980 a study was conducted using the Backster You-phase test. Three charts were collected from all examinees. Using +/-8 cutoffs, accuracy was as follows: 21 correct, 11 INC and 0 errors. Using +/-2 cutoffs, accuracy was as follows: 29 correct, 2 INC and 3 errors. If the study had used and reported only one set of cut-score results, we wouldn't know how accuracy changes from one approach to another. (Scores were also reported for +/-4 and a zero cutoff, and they were scored by the original student examiners and later by experts. The purpose of the study was to learn if psychopaths and alcoholics differed from the so-called normal sample.) All of those variables, including how they are combined, might impact what is "typical" for the average examiner.
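
The tradeoff in those two sets of numbers (wider cutoffs trade errors for inconclusives) can be made explicit. A small sketch using the figures as reported above:

```python
# Counts as reported for the 1980 Backster You-phase study.
def rates(correct, inconclusive, errors):
    decided = correct + errors               # cases with a DI/NDI decision
    total = correct + inconclusive + errors
    return correct / decided, inconclusive / total

acc8, inc8 = rates(21, 11, 0)   # +/-8 cutoffs
acc2, inc2 = rates(29, 2, 3)    # +/-2 cutoffs

print(f"+/-8: {acc8:.1%} of decisions correct, {inc8:.1%} inconclusive")
print(f"+/-2: {acc2:.1%} of decisions correct, {inc2:.1%} inconclusive")
```

Wider cutoffs give perfect decisions at the cost of roughly a third of tests going inconclusive; narrower cutoffs decide nearly everything but admit some errors.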

That's one reason for standardization and using a scoring system that has been normed using typical examiners. We can be more confident in knowing how the variables play together, and we have a better idea of what any result means.

I think we can be pretty confident we're in the general area of typical accuracy. I'm sure there are variables we could better identify that would provide indicators of when we should be more concerned. If the meta-analysis only contained experts (it didn't), then we could say we have a pretty good idea of what to expect from an expert.

No study will ever be perfect. We just do what we can to minimize bias in the assessment process.


Barry C
Member
posted 12-13-2012 02:35 PM
Dan,

I forgot to answer your final question. Both sides are overly cautious in my experience. They talk outside of professional meetings, but there seems to be some stigma in fraternizing with the enemy. That's not supposed to be part of the scientific approach, but people are involved. Charles Honts has forged some relationships with some of the skeptical criminal justice / forensic science types. So, it does happen, but it's slow.


Dan Mangan
Member
posted 12-14-2012 09:29 AM
quote:
Both sides are overly cautious in my experience. They talk outside of professional meetings, but there seems to be some stigma in fraternizing with the enemy. That's not supposed to be part of the scientific approach, but people are involved.

I applaud Barry's candor.

But stigma in fraternizing with the enemy? Really? I'm shocked.

Just kidding.

But I can understand the indu$try's desperation in keeping the little man behind the curtain safely out of sight.

Indeed, it's this us-and-them mentality that helps prop up the rickety playing-card House of Polygraph.

Jokers are wild, and in abundance. "You see, kids, the polygraph knows when you lie."

Why, exactly, are the "antis" perceived as the enemy? Let's look at some possibilities...

o Because they make public our trade secrets? Check.
o Because they are screwing with our rice bowls? Check.
o Because discouraging civil dialogue and a steady exchange of ideas and information with those who disagree with us will facilitate our own efforts at understanding the science behind polygraphy? Umm, 'fraid not.

Well, two out of three ain't bad! In that fight we're batting about .666 -- the mark of the BEAST -- well above chance, though well below perfection.

I say we should get the antis involved. Maschke, Iacono, Fienberg, Vess, Sack, Zelicoff, Rosky, et al. The more, the merrier! Build a bridge. Invite them to our events. Tap their brains. Share information.

But it'll never happen.

Over the years I've heard GM described as a nut case, a pedophile, a spy, a CIA agent, etc. And this is at APA events! Among the thinly veiled snubs: We all know what kind of people are drawn to The Netherlands!

GM has done more to advance polygraph education than anyone else on earth. If you don't believe me, then you haven't drilled down into the bowels of A-P.org and explored the countless links he has provided and organized.

Naturally, he's demonized.

And here, on this forum, if someone asks probing questions, or, God forbid, questions "authority," then the accusations fly: troll, anti-polygraph polygraph examiner, non-believer (that one kills me), etc.

People, this is hardly the stuff of a level-headed scientific approach to understanding the machinations of polygraph "science." Or polygraph "art," for that matter.

Put all of that aside for a moment. Let's do some more PowerBall daydreaming...

Let's say we invite that axis of evil -- George, Gino and Drew -- to a national seminar for the CM challenge. The terrible trio has prepped their contender, a Dolph Lundgren lookalike, straight from his "Ivan Drago" character in Rocky IV, to beat the test.

The CM challenge takes place in a soundproofed room in the seminar hotel, but it's fed via CCTV to a slew of Sony Jumbotrons in the auditoriums so all can witness history in the making.

As the test unfolds, you could hear a pin drop (except for Ray incessantly tapping away on his laptop and Skip chowing down on some veal parm).

The examiner -- in his black-magic psych-out pre-test raindance -- says, "Ivan, the polygraph knows when you lie. I'll show you. Here, pick a number."

[uproarious laughter]

The test continues, and Ivan beats the test like a drum.

After a break, the players take the stage and a step-by-step post-mortem ensues.

Ivan and the terrible trio explain exactly how they did it.

Wouldn't that be great?

On one level, yes. On other levels, it would be an unmitigated disaster.

That's why there will be no bridge, no healthy exchanges, no open dialogue, no nuthin'.

And there sure as hell will be no polygraph equivalent to the Rumble in the Jungle -- or anything close to it.

Why? There's too much to lose, and the polygraph indu$try is too big to fail.

Bottom line: The intellectual stalemate will continue.

That's not necessarily bad news... At least our rice bowls will remain a little bit safer for a while longer.

But the indu$try should stop kidding itself (and others) about its scientific approach.

The antis bring to the table a part of the equation that is inextricably linked to understanding the polygraph beast.

Refusing to reach out to the opposition for inclusive debate and mutual fact finding is selfish, shallow and intellectually disingenuous.

Dan

[This message has been edited by Dan Mangan (edited 12-14-2012).]

rnelson
Member
posted 12-14-2012 11:45 AM
I think stalemate is the wrong metaphor.

Cold-war is probably better.

.02

r

------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)


clambrecht
Member
posted 12-14-2012 07:20 PM

The anti-polygraph forum post for that challenge is 22 pages long, spanning 10 years! From my perspective, I don't think the results would prove or disprove the efficacy of polygraph examinations. I think the APA should respond: "We do not accept the challenge, and we fully stipulate that countermeasures exist and probably go unnoticed at times. In fact, every examinee uses some sort of countermeasure to try to do well on the test -- even if it's simply getting plenty of rest the night before. Because countermeasures do not negate the value of polygraph, we offer the following challenge: ......"
The challenge would be an invitation to participate in a robust polygraph study as examinees and coordinators. Both sides would coordinate a mutually agreeable study with peer-reviewed results. Those peers should also include experts from outside the polygraph field.

Link to their CM challenge:
http://www.antipolygraph.org/cgi-bin/forums/YaBB.pl?num=1012236418

Barry C
Member
posted 12-14-2012 08:08 PM
But they can only supply about 4 people.

Dan Mangan
Member
posted 12-14-2012 08:24 PM
quote:
Because countermeasures do not negate the value of polygraph...

Huh? They most certainly do -- as do a host of other factors and "uncertainties."

We should all memorize this quote from Justice Thomas' majority opinion in Scheffer:

quote:
...there is simply no way to know in a particular case whether a polygraph examiner's conclusion is accurate, because certain doubts and uncertainties plague even the best polygraph exams.

If the APA actually believed in their own meta-analysis, they'd accept the CM challenge.

They don't -- as evidenced by the mention of "special populations" in the study's fine print -- so they won't.

Let's get real.

The polygraph indu$try in general, and the APA in particular, are afraid of Maschke and his challenge.

So much for polygraph "science."

Barry C
Member
posted 12-14-2012 11:23 PM
quote:
...there is simply no way to know in a particular case whether a polygraph examiner's conclusion is accurate, because certain doubts and uncertainties plague even the best polygraph exams.

You can replace "polygraph examiner's" with a judge's, jury's, doctor's, whatever. Many, if not most, of their decisions are plagued with doubts and uncertainties. Look at how many people DNA evidence has exonerated. It wasn't polygraph that failed them. It was eyewitness error (a higher error rate than polygraph) and false confessions. And those are the cases proved, allegedly, beyond a doubt (but not all doubt).

I think it was Gordon Barland who explained why the CM challenge was a no-win (or lose/lose) proposition for examiners. You can't win. It's like the shell game or three-card monte.

Gordon H. Barland
Member
posted 12-15-2012 12:21 AM
Dan,

Regarding your suggestion of doing a blind numbers test live in front of a TV audience, I’ve done the next best thing. KUTV, a TV station in Salt Lake City, regularly aired a popular segment in which their investigative reporters looked into viewers’ complaints or questions and presented the results each week on the Friday night newscast. In 2008 they were asked by a viewer whether it is possible to beat the polygraph. They contacted me and wanted me to demonstrate the polygraph in a way that would answer the question. I agreed to conduct a test on camera for them in their studio on one of their reporters, Fields Mosely.

The arrangement was that they would tape me running a numbers test with no CMs involved, followed by a discussion on camera of the CMs most advocated on the Internet. This would then be followed by a second numbers test on Mr. Mosely in which he would employ countermeasures to beat me. We agreed that I would run three charts on each exam before calling the number. Their thinking was that the first test would provide a baseline for the accuracy of the polygraph, and the second test would show what happens when countermeasures are used. It would be an entertaining way to answer the viewer’s question.

Both numbers tests would be blind; Mr. Mosely would write one of five numbers (3 through 7) on a piece of paper behind my back and conceal it under his thigh throughout the test. Numbers 1, 2, 8, and 9 were padding. On the first numbers test, it was pretty obvious on the first chart what the selected number was, and the second chart was confirming it. In addition, it was obvious that despite the instructions and his agreement not to use CMs on this test he did so, and rather crudely; I saw the toes of his shoes press down on a specific number on both charts. I stopped the test in mid-chart and announced “You picked the number 7 but you were trying to make me think it was 5.” Mr. Mosely sat silent without changing his expression and without saying a word. It was the longest three or four seconds I had experienced in years! The moderator (off camera) finally asked, “Is that right, Fields?” The reporter said yes. “And were you trying to trick him about 5?” “Yes.”

We then discussed what countermeasures were being recommended on the Internet, which at that time were contracting the anal sphincter, biting the tongue, and thinking arousing thoughts or imagery.

Mr. Mosely and the moderator took a ten minute break to plan their countermeasure strategy for the next test. Upon their return I ran another blind numbers test in which he could choose numbers 23-27 and use CMs on any one of the remaining four numbers. I ran three charts, took a ten minute break to analyze the charts, and on camera made my calls. “You wrote the number 23 and used countermeasures on 26.” Mr. Mosely again was stone faced for several seconds. “You’re right,” he conceded. He later explained (off camera) that during their break to plan strategy, he had decided not to tense his anal sphincter because I had a movement pad on his chair. He practiced biting his tongue in front of a mirror in the studio bathroom and decided against that, so he used a mental countermeasure, thinking of something terrible happening to his son. And the Internet claim is that mental countermeasures cannot be identified.

What was the risk of my doing this on camera? It was posted online by KUTV for several months afterward. If I had been wrong, George Maschke would have trumpeted my failure to the world and would have posted the segment on YouTube. You better believe I was nervous. Very nervous!

Of course this was just an anecdotal N of 1, yet it can be analyzed statistically. I had one chance in 5 (20%) of guessing the selected number correctly on the first test, and again a 20% chance on the second test. The chances of getting both right strictly by guessing are just one in 25 (1/5 x 1/5), or 4%. But wait… in this case something more happened. I also correctly identified the countermeasured number on both tests. What’s the probability of that? Well, that’s more complicated to calculate. I had seen a slight movement on the first test, but he wasn’t supposed to be using CMs, and it could have been entirely coincidental; I made the call on camera based upon my clinical experience, and my reputation was fully at stake. Certainly the chance of correctly detecting both numbers on the second test in isolation is 1/5 x 1/4 = 1/20, or 5%. If you believe it was also 1/20 on the first test, then the combined probability would be just 1/20 x 1/20 -- one chance in 400, or a 0.25% probability.
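Barland's arithmetic above can be verified in a few lines of Python. This is just a sketch of the odds he states (five candidate numbers per test, four candidates for the countermeasured number), using exact fractions to avoid rounding:

```python
from fractions import Fraction

# Test 1: five candidate numbers (3-7), so a pure guess
# at the selected number succeeds 1 time in 5.
p_test1 = Fraction(1, 5)

# Test 2: five candidate numbers (23-27) for the written number,
# then one of the remaining four for the countermeasured number.
p_test2_number = Fraction(1, 5)
p_test2_cm = Fraction(1, 4)

# Both written numbers right by guessing alone: 1/25 = 4%
p_both_numbers = p_test1 * p_test2_number
assert p_both_numbers == Fraction(1, 25)

# Written number plus CM number on the second test: 1/20 = 5%
p_test2_full = p_test2_number * p_test2_cm
assert p_test2_full == Fraction(1, 20)

# Treating the first test as 1/20 as well, the combined chance
# of every call being a lucky guess: 1/400 = 0.25%
p_combined = p_test2_full * p_test2_full
assert p_combined == Fraction(1, 400)
print(float(p_combined))  # 0.0025
```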

Certainly the application of the polygraph is very much a clinical skill, but the polygraph also “works” in a strictly psychophysiological sense.

Peace

Gordon

Dan Mangan
Member
posted 12-15-2012 09:33 AM
Hi Gordon,

Actually, my (hypothetical) suggestion was to use closed-circuit television so all APA seminar attendees could view the A-P CM challenge being undertaken in a private room at the seminar hotel, but that's neither here nor there...

Thanks very much for refreshing my memory about your experience. I recall reading about it here on Polygraph Place a while back.

Couple of things...

1. You are not a garden-variety polygraph operator.

2. The amount of time your subject had to learn about and practice CMs is not reflective of real-world CM applications.

Here's my CM anecdote...

The public high school in my town requires that students complete a "senior project" in order to graduate. Early in 2011, I was approached by a student who wanted to see if she could beat a polygraph test. At the time, I was very busy launching grenades at this forum, throwing darts at my Ted Todd dartboard, and just getting warmed up for needling Skip and other gummint bureaucrats here on PP, so I declined.

The kid persisted, so I agreed to meet with her and her teacher/mentor to discuss the project.

The three of us met at the school for over an hour. I explained how the CQT "works," described who uses polygraph and why, then took them on a tour of several web sites, including A-P.

After my dog-and-pony show, the teacher/mentor approved of the project.

I agreed to provide the charts and full video for her to do with as she pleased after the test. (Many of the students' projects use multimedia components.)

BTW, I advised the kid that if she wanted any chance at all of beating the test, she should employ mental CMs, as I use the full ensemble of motion sensors, and have two cameras and a mirror strategically placed in my office.

If memory serves, the student had about four weeks to study CMs in preparation for the test. We exchanged a few emails during that interim time when she had questions.

Finally, we agreed on a test date for her to commit a mock crime (or not) and then be tested.

The mock crime involved taking money from a wallet that was in a desk in a room that I no longer used. (At that time, I was changing rooms in my office building, so I had possession of two separate spaces while I completed the transition.)

My instructions for the crime went something like this:

- Go to Room 9 on the second floor.
- Locate the wallet in the desk.
- COUNT the money.
- Take some, all or NONE of the money.
- Report to the polygraph suite.

Because she was 17, I insisted that she be accompanied by a parent. However, she flew solo on the crime phase.

To simplify things a bit, the wallet contained three 20-dollar bills.

I informed the student that I would first run a specific-issue test to determine if she took any of the money. If the test indicated she did, I would then run a PPOT to determine the amount.

By the way, the student is a pretty bright kid. After graduation last year she entered a pre-med program at an Ivy League school. Her mother -- who accompanied her to the test -- is a pediatrician.

First, I ran a specific-issue test (Federal Bi-Zone, I think) on taking the money.

The charts were definitely "conflicted," but they looked DI to me.

Next I ran a PPOT. These charts were rugged, but it looked more like she took $20.

BTW, I ran my analysis in real time with both of them in the office. Maybe 5-10 minutes tops.

With cameras rolling -- she gets the video, remember -- I made my pronouncement: "You stole 20 bucks."

Bingo. (Whew!)

I gotta tell ya, some of them charts wuz TOUGH! The CM-influenced tracings actually overpowered those of the RQs in several instances, BUT they were not timely.

When the kid was answering the CQs, her expression suggested she was really "working." She did not exhibit the sit-back-and-relax demeanor of many truth-tellers I've tested.

But it was close. Real close.

If she had had the luxury of practicing and getting her sense of timing down, I'm confident she could have beaten the test.

I have no idea how diligently she studied mental CMs and "practiced" employing them, or if she simply crammed the night before the test. But if a 17-year-old kid can zero in on the process with such proficiency, I think that very driven individuals with certain resources could succeed a significant percentage of the time.

Dan


[This message has been edited by Dan Mangan (edited 12-15-2012).]

clambrecht
Member
posted 12-18-2012 07:51 PM
Have there been polygraph polls that simply ask real-world examinees afterward whether they used CMs during a real-world exam? Is there a way to anonymously ask current and former police officers/agents if they used CMs on the poly during the hiring process? They wouldn't be asked to make any disclosures of past misdeeds -- simply yes or no about CMs. I would bet that if there were a method that guaranteed anonymity, the results would be accurate. If several agencies here on Poly Place agreed upon a method that our skeptical coworkers would trust as confidential, maybe we could plan to do it in 2013 and combine the results. I cannot think of a method at this moment that a CM-using cop or agent would trust not to get them terminated.

[This message has been edited by clambrecht (edited 12-18-2012).]

Barry C
Member
posted 12-18-2012 09:00 PM
Truthful people can use countermeasures, so what would a survey reveal? What if 100% of the population used CMs but they didn't result in false positives or false negatives? There was one study that revealed about 40 to 50% of truthful examinees used spontaneous CMs. What about people who lied on the test and say they still passed? They may have used CMs, but they may have lied on the CQs. I can think of one case in which a woman said she lied on the test and still passed. It turns out she lied on the CQs. Whether she tried to control her breathing to "relax" or not, I don't know, but if she did, what would her survey responses tell you?

Did you use CMs? Yes.
Did you lie? Yes
Did you pass? Yes

What does that mean? It could mean at least two different things. Thus, it doesn't tell you what you really want to know.

Dan Mangan
Member
posted 12-18-2012 09:32 PM
Truthful or not, you'd have to be crazy NOT to use countermeasures on a polygraph "test."

Actually, you'd have to be crazy to even submit to a polygraph "test."

Before you excoriate me, please think about it.

Imagine yourself -- or your kid -- in the hot seat.

quote:
...there is simply no way to know in a particular case whether a polygraph examiner's conclusion is accurate, because certain doubts and uncertainties plague even the best polygraph exams.

-- Justice Clarence Thomas

I agree.


[This message has been edited by Dan Mangan (edited 12-18-2012).]

clambrecht
Member
posted 12-18-2012 10:10 PM
Thanks, Barry, for your thoughts. Constructive criticism from peers will only sharpen my critical thinking skills, so thank you. I would ask that you also keep an open mind, as I respectfully disagree with you. Such a study would simply contribute to the greater body of knowledge; no grand meanings are intended. This small piece of the puzzle could provide a glimpse at the widespread nature (or not) of some pre-defined countermeasures. The poll would need some sort of common set of definitions of what CMs are. Waking up early, eating breakfast, quitting marijuana for a week, meditating to relax before the appointment, and triple-checking your memory to gain confidence would probably not be on that list. So let me offer specifics: my goal/question would be "How prevalent are certain pre-defined CMs?" My hypothesis would be a prediction that at least x% of examinees have utilized these CMs. The results would be like any other results, and discussed. If the poll results show that 400 law enforcement officers out of 1000 admitted to having used these CMs during the hiring process or during an internal exam, many will disagree with your view that such results are meaningless. Again, it's simply a piece of a greater picture, and any results would demand rigorous peer review. Such a study would not be intended to determine whether the CMs came from a truthful or a deceptive person. Any such purposeful noncompliance with my instructions during the pretest will prevent someone from being hired. If officers who are guaranteed immunity admit to such CMs years later in a poll, then I find it troubling that so many were not detected. If 10 out of 1000 admit it, those results can also be discussed.
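As a sketch of how such a poll could be summarized, here is the hypothetical 400-of-1000 figure run through a standard binomial confidence interval. The Wilson score interval and the `wilson_interval` helper are illustrative choices on my part, not anything proposed in the thread:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical poll result from the post: 400 of 1000 officers admit CM use.
lo, hi = wilson_interval(400, 1000)
print(f"admitted-CM rate: {400/1000:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# roughly 37% to 43%
```

With a sample that size, the interval is tight enough that even skeptics could not dismiss a 40% admission rate as sampling noise; with 10 admissions out of 1000, the same function would show how much wider the relative uncertainty becomes.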

[This message has been edited by clambrecht (edited 12-18-2012).]

Barry C
Member
posted 12-18-2012 10:12 PM
As I've said before, the same is true for jury verdicts.

The only empirical evidence we have on truthful people using physical countermeasures is that they either do nothing or lead to more negative scores.

I don't know why we're so worried about countermeasures. The current evidence supports the view that without training and practice, nobody is going to "beat" you. With training and practice, they still don't stand much of a chance. Unless you're testing spies -- who conceivably have the resources for training that might work -- there is no evidence countermeasures are a problem, except maybe for the truthful who are dumb enough to get bad advice.

How many of you have cases proven to be false negatives? Of those, how many used countermeasures? You'd think if they were such a problem we'd see more. I can't think of one - not that that proves anything. I've seen plenty of dummies who tried things and didn't make out like they had hoped.

Dan Mangan
Member
posted 12-18-2012 10:20 PM
quote:
The only empirical evidence we have on truthful people using physical countermeasures is that they either do nothing or lead to more negative scores.

Big deal. What of mental countermeasures?

As noted in a previous post in this thread, a high school kid came "this close" to beating the test like a drum.

Don't underestimate mental CMs.

We don't know what we don't know.

Barry C
Member
posted 12-18-2012 10:20 PM
If 40% admitted to countermeasures, what would that indicate? I'm not sure what you want to know. We have evidence to indicate that already. Isn't the real question whether they are a problem? Dr. Rankin's position is that 100% of the guilty are employing countermeasures (to prevent detection). People use countermeasures in the interview. Doesn't ground truth impact what the results actually mean?

Barry C
Member
posted 12-18-2012 10:24 PM
Close doesn't count. However, your N of one also shows they don't work. Mental countermeasures don't work. Every liar is employing mental countermeasures.
